
    Interpretable Diabetic Retinopathy Diagnosis based on Biomarker Activation Map

    Deep learning classifiers provide the most accurate means of automatically diagnosing diabetic retinopathy (DR) based on optical coherence tomography (OCT) and its angiography (OCTA). The power of these models is attributable in part to the inclusion of hidden layers that provide the complexity required to achieve a desired task. However, hidden layers also render algorithm outputs difficult to interpret. Here we introduce a novel biomarker activation map (BAM) framework based on generative adversarial learning that allows clinicians to verify and understand a classifier's decision-making. A data set of 456 macular scans was graded as non-referable or referable DR according to current clinical standards. A DR classifier used to evaluate our BAM was first trained on this data set. The BAM generation framework combines two U-shaped generators to provide meaningful interpretability for this classifier. The main generator was trained to take referable scans as input and produce outputs that the classifier would classify as non-referable. The BAM is then constructed as the difference image between the output and input of the main generator. To ensure that the BAM highlights only classifier-utilized biomarkers, an assistant generator was trained to do the opposite: produce scans from non-referable inputs that the classifier would classify as referable. The generated BAMs highlighted known pathologic features, including nonperfusion area and retinal fluid. A fully interpretable classifier based on these highlights could help clinicians better utilize and verify automated DR diagnosis. Comment: 12 pages, 8 figures
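    The core BAM construction, a difference image between the main generator's counterfactual output and its input, can be sketched as follows. This is a minimal illustration, not the paper's implementation: `biomarker_activation_map` and `toy_generator` are hypothetical names, and the toy generator simply zeroes a simulated lesion region in place of a trained U-shaped network.

    ```python
    import numpy as np

    def biomarker_activation_map(scan, main_generator):
        """Build a BAM as the absolute difference between the generator's
        output (a counterfactual scan the classifier would call non-referable)
        and the original referable input. Hypothetical interface."""
        counterfactual = main_generator(scan)
        return np.abs(counterfactual - scan)

    # Stand-in for a trained main generator: suppresses a simulated biomarker.
    def toy_generator(scan):
        out = scan.copy()
        out[8:12, 8:12] = 0.0  # "remove" the pathologic feature
        return out

    scan = np.zeros((20, 20))
    scan[8:12, 8:12] = 1.0     # simulated referable biomarker (e.g., fluid)
    bam = biomarker_activation_map(scan, toy_generator)
    # The BAM is nonzero exactly where the generator altered the scan,
    # i.e., at the classifier-utilized biomarker.
    ```

    In the paper's setup, the assistant generator plays the opposite role during training so that the difference image is constrained to classifier-utilized biomarkers rather than arbitrary image changes.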

    Deep-Learning–Aided Diagnosis of Diabetic Retinopathy, Age-Related Macular Degeneration, and Glaucoma Based on Structural and Angiographic OCT

    Purpose: Timely diagnosis of eye diseases is paramount to obtaining the best treatment outcomes. OCT and OCT angiography (OCTA) have several advantages that lend themselves to early detection of ocular pathology; furthermore, the techniques produce large, feature-rich data volumes. However, the full clinical potential of both OCT and OCTA is stymied when complex data acquired using the techniques must be manually processed. Here, we propose an automated diagnostic framework based on structural OCT and OCTA data volumes that could substantially support the clinical application of these technologies. Design: Cross-sectional study. Participants: Five hundred twenty-six OCT and OCTA volumes were scanned from the eyes of 91 healthy participants, 161 patients with diabetic retinopathy (DR), 95 patients with age-related macular degeneration (AMD), and 108 patients with glaucoma. Methods: The diagnostic framework was constructed based on semisequential 3-dimensional (3D) convolutional neural networks. The trained framework classifies combined structural OCT and OCTA scans as normal, DR, AMD, or glaucoma. Fivefold cross-validation was performed, with 60% of the data reserved for training, 20% for validation, and 20% for testing. The training, validation, and test data sets were independent, with no shared patients. For scans diagnosed as DR, AMD, or glaucoma, 3D class activation maps were generated to highlight subregions that the framework considered important for automated diagnosis. Main Outcome Measures: The area under the receiver operating characteristic curve (AUC) and quadratic-weighted kappa were used to quantify the diagnostic performance of the framework. Results: For the diagnosis of DR, the framework achieved an AUC of 0.95 ± 0.01. For the diagnosis of AMD, the framework achieved an AUC of 0.98 ± 0.01. For the diagnosis of glaucoma, the framework achieved an AUC of 0.91 ± 0.02. Conclusions: Deep learning frameworks can provide reliable, sensitive, interpretable, and fully automated diagnosis of eye diseases. Financial Disclosure(s): Proprietary or commercial disclosure may be found after the references.
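    The patient-independent splitting described in Methods (no patient's scans shared across training, validation, and test sets) can be sketched by assigning folds at the patient level rather than the scan level. This is an illustrative helper, not the authors' code; `patient_level_folds` is a hypothetical name, and with five folds, each iteration can use three folds for training (60%), one for validation (20%), and one for testing (20%).

    ```python
    import random

    def patient_level_folds(patient_ids, n_folds=5, seed=0):
        """Assign every unique patient to exactly one fold, so all scans
        from the same patient share a fold (hypothetical helper)."""
        unique = sorted(set(patient_ids))
        rng = random.Random(seed)
        rng.shuffle(unique)
        fold_of = {p: i % n_folds for i, p in enumerate(unique)}
        return [fold_of[p] for p in patient_ids]

    # Toy example: some patients contribute multiple OCT/OCTA volumes.
    pids = ["p1", "p1", "p2", "p3", "p3", "p4", "p5"]
    folds = patient_level_folds(pids)
    # Every scan from the same patient lands in the same fold,
    # so no patient appears in more than one of train/val/test.
    ```

    Scan-level shuffling would instead risk leaking near-identical volumes from one patient into both training and test sets, inflating the reported AUCs.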